38 research outputs found

    Universal Coding on Infinite Alphabets: Exponentially Decreasing Envelopes

    This paper deals with the problem of universal lossless coding on a countably infinite alphabet. It focuses on classes of sources defined by an envelope condition on the marginal distribution, namely exponentially decreasing envelope classes with exponent $\alpha$. The minimax redundancy of exponentially decreasing envelope classes is proved to be equivalent to $\frac{1}{4 \alpha \log e} \log^2 n$. A coding strategy is then proposed, with a Bayes redundancy equivalent to the maximin redundancy. Finally, an adaptive algorithm is provided, whose redundancy is equivalent to the minimax redundancy.
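
    For orientation, the envelope condition and the stated redundancy equivalent can be written out as below; the exact parametrization of the envelope (the constant $C$ and the form $C e^{-\alpha k}$) is an assumption made only to fix notation.

        % Envelope class with an exponentially decreasing envelope of exponent \alpha
        % (the constant C and the exact form of the envelope are assumed here).
        \[
          \Lambda_{C,\alpha} = \Bigl\{ P \ \text{memoryless on}\ \mathbb{N}^{*} :
            P(X_1 = k) \le C e^{-\alpha k} \ \text{for all } k \ge 1 \Bigr\}
        \]
        % Stated equivalent of the minimax redundancy for message length n:
        \[
          R^{+}\bigl(\Lambda_{C,\alpha}^{n}\bigr) \sim \frac{1}{4 \alpha \log e} \, \log^{2} n ,
          \qquad n \to \infty .
        \]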

    Bernstein von Mises Theorems for Gaussian Regression with increasing number of regressors

    This paper contributes to the Bayesian theory of nonparametric and semiparametric estimation. We are interested in the asymptotic normality of the posterior distribution in Gaussian linear regression models when the number of regressors increases with the sample size. Two kinds of Bernstein-von Mises theorems are obtained in this framework: nonparametric theorems for the parameter itself, and semiparametric theorems for functionals of the parameter. We apply them to the Gaussian sequence model and to the regression of functions in Sobolev and $C^{\alpha}$ classes, in which we obtain the minimax convergence rates. Adaptivity is reached for the Bayesian estimators of functionals in our applications.
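
    As a purely illustrative companion to this result (not the theorems themselves), the sketch below simulates Gaussian linear regression with a conjugate Gaussian prior and a number of regressors growing with the sample size; the prior scale, the growth rate, and the linear functional are arbitrary choices, not those of the paper. The posterior standard deviation of the functional matches the frequentist one as the sample size grows, which is the Bernstein-von Mises phenomenon in its simplest form.

        # Illustrative sketch only: posterior vs. frequentist normality for a linear
        # functional in Gaussian regression with a conjugate Gaussian prior.
        # The prior variance, the growth rate of the number of regressors, and the
        # functional are arbitrary choices, not those of the paper.
        import numpy as np

        rng = np.random.default_rng(0)
        sigma2, tau2 = 1.0, 10.0              # noise variance (known), prior variance

        for n in (100, 1000, 10000):
            k = int(n ** 0.4)                 # number of regressors grows with n
            X = rng.normal(size=(n, k))
            theta = rng.normal(size=k) / np.sqrt(k)
            y = X @ theta + np.sqrt(sigma2) * rng.normal(size=n)

            # Conjugate Gaussian posterior N(mean, cov) for theta
            prec = X.T @ X / sigma2 + np.eye(k) / tau2
            cov = np.linalg.inv(prec)
            mean = cov @ (X.T @ y) / sigma2

            # Frequentist counterpart: least squares and its covariance
            theta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
            cov_hat = sigma2 * np.linalg.inv(X.T @ X)

            # Compare the posterior and sampling laws of the functional a' theta
            a = np.ones(k) / np.sqrt(k)
            print(f"n={n:6d} k={k:4d}  posterior sd={np.sqrt(a @ cov @ a):.4f}  "
                  f"frequentist sd={np.sqrt(a @ cov_hat @ a):.4f}")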

    Clustering and variable selection for categorical multivariate data

    This article investigates unsupervised classification techniques for categorical multivariate data. The study employs multivariate multinomial mixture modeling, a class of models particularly well suited to multilocus genotypic data. A model selection procedure is used to simultaneously select the number of components and the relevant variables. A non-asymptotic oracle inequality is obtained, leading to a new penalized maximum likelihood criterion. The selected model is shown to be asymptotically consistent under weak assumptions on the true probability distribution underlying the observations. The main theoretical result yields a penalty function defined up to a multiplicative constant. In practice, the data-driven calibration of the penalty function is carried out with the slope heuristics (see the sketch after this abstract). On simulated data, this procedure improves the performance of the selection procedure with respect to classical criteria such as BIC and AIC. The new criterion provides an answer to the question "Which criterion for which sample size?" Applications to real datasets are also provided.
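
    Below is a schematic sketch of the data-driven calibration step, following the usual slope-heuristic recipe (estimate a minimal slope from the most complex models, then use twice that slope); the linear penalty shape and the choice of fitting on the largest half of the models are assumptions made for illustration, not the authors' implementation.

        # Schematic slope-heuristic calibration (illustration only, not the authors' code).
        # Inputs: for each candidate model m, its dimension D[m] and its maximized
        # log-likelihood L[m]. The linear penalty shape kappa * D[m] is an assumption.
        import numpy as np

        def slope_heuristic_penalty(D, L, frac_largest=0.5):
            """Estimate the minimal slope from the most complex models and
            return the calibrated penalty pen(m) = 2 * kappa_min * D[m]."""
            D, L = np.asarray(D, float), np.asarray(L, float)
            order = np.argsort(D)
            big = order[int(len(D) * (1.0 - frac_largest)):]    # most complex models
            kappa_min, _ = np.polyfit(D[big], L[big], deg=1)    # L ~ kappa_min * D + c
            return 2.0 * kappa_min * D

        # Usage sketch: keep the model maximizing the penalized log-likelihood.
        # pen = slope_heuristic_penalty(D, L)
        # m_hat = int(np.argmax(np.asarray(L) - pen))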

    About adaptive coding on countable alphabets

    This paper sheds light on universal coding with respect to classes of memoryless sources over a countable alphabet defined by an envelope function with finite and non-decreasing hazard rate. We prove that the auto-censuring (AC) code introduced by Bontemps (2011) is adaptive with respect to the collection of such classes. The analysis builds on the tight characterization of universal redundancy rate in terms of metric entropy of small source classes by Haussler and Opper (1997) and on a careful analysis of the performance of the AC-coding algorithm. The latter relies on non-asymptotic bounds for maxima of samples from discrete distributions with finite and non-decreasing hazard rate.
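
    The last ingredient, control of maxima of samples from discrete distributions with non-decreasing hazard rate, can be illustrated by a small simulation; the geometric envelope (whose hazard rate is constant, hence non-decreasing) and the sample sizes are arbitrary choices. The sample maximum grows only logarithmically in the number of draws.

        # Monte Carlo illustration only: for a geometric distribution, whose hazard
        # rate P(X = k | X >= k) = p is constant (hence non-decreasing), the maximum
        # of n i.i.d. draws grows like log(n). Parameters are arbitrary.
        import numpy as np

        rng = np.random.default_rng(1)
        p = 0.3

        for n in (10**2, 10**3, 10**4, 10**5):
            maxima = rng.geometric(p, size=(100, n)).max(axis=1)   # 100 replicates
            print(f"n={n:6d}  mean max = {maxima.mean():7.2f}   "
                  f"log(n)/log(1/(1-p)) = {np.log(n) / np.log(1.0 / (1.0 - p)):7.2f}")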

    Adaptive compression against a countable alphabet

    This paper sheds light on universal coding with respect to classes of memoryless sources over a countable alphabet defined by an envelope function with finite and non-decreasing hazard rate. We prove that the auto-censuring (AC) code introduced by Bontemps (2011) is adaptive with respect to the collection of such classes. The analysis builds on the tight characterization of universal redundancy rate in terms of metric entropy by Haussler and Opper (1997) and on a careful analysis of the performance of the AC-coding algorithm. The latter relies on non-asymptotic bounds for maxima of samples from discrete distributions with finite and non-decreasing hazard rate.

    Statistiques discrètes et Statistiques bayésiennes en grande dimension (Discrete statistics and high-dimensional Bayesian statistics)

    In this PhD thesis we present the results we obtained in three linked fields: data compression over infinite alphabets; infinite-dimensional Bayesian statistics; and multivariate multinomial mixture models. The first part deals with the problem of universal lossless coding on a countably infinite alphabet. It focuses on classes of stationary memoryless sources defined by an envelope condition on the marginal distribution, namely exponentially decreasing envelope classes. An equivalent of the minimax redundancy of such classes is obtained. An approximately maximin prior distribution is then provided, and an adaptive algorithm is proposed whose maximum redundancy is equivalent to the minimax redundancy. The next works deal with the asymptotic normality of posterior distributions (Bernstein-von Mises theorems) in several nonparametric and semiparametric frameworks: first, in Gaussian linear regression models when the number of regressors increases with the sample size. Two kinds of Bernstein-von Mises theorems are obtained in this framework: nonparametric theorems for the parameter itself, and semiparametric theorems for functionals of the parameter. We apply them to the Gaussian sequence model and to the regression of functions in Sobolev and Hölder regularity classes, in which we obtain the minimax convergence rates. Adaptivity is reached for the Bayesian estimators of functionals in our applications. We also obtain a nonparametric Bernstein-von Mises theorem for increasing-dimensional exponential models. In the last part of this work we consider the problem of estimating the number of components and the relevant variables in a multivariate multinomial mixture, in order to perform an unsupervised classification. Such models arise in particular when dealing with multilocus genotypic data. A new penalized maximum likelihood criterion is proposed, and a non-asymptotic oracle inequality is obtained. The criterion used in practice requires a calibration by the slope heuristics, in an automatic data-driven procedure. On simulated data, this procedure improves the performance of the selection procedure with respect to classical criteria such as BIC and AIC. The procedures are implemented in freely available software.

    In this doctoral thesis we present the work we carried out in three related directions: data compression over infinite alphabets, Bayesian statistics in infinite dimension, and mixtures of multivariate discrete distributions. In the setting of lossless data compression, we studied classes of stationary memoryless sources over an infinite alphabet, defined by an exponentially decreasing envelope condition on the marginal distributions. An equivalent of the minimax redundancy of these classes was obtained. An approximately minimax algorithm, together with approximately least favorable priors based on the Jeffreys prior for finite alphabets, was also proposed. The second line of work concerns the asymptotic normality of posterior distributions (Bernstein-von Mises theorems) in various nonparametric and semiparametric settings: first, in a Gaussian regression framework where the number of regressors grows with the sample size. The nonparametric theorems concern the regression coefficients, while the semiparametric theorems concern functionals of the regression function. In our applications to the Gaussian sequence model and to the regression of functions in Sobolev or Hölder regularity classes, we obtain both the Bernstein-von Mises theorem and the minimax frequentist estimation rate. Adaptivity is reached for the estimation of functionals in these applications. We also present a nonparametric Bernstein-von Mises theorem for exponential models of increasing dimension. Finally, the last part of this work deals with estimating the number of components and the relevant variables in mixtures of multivariate multinomial distributions, with a view to unsupervised classification. Models of this type are used, for instance, to handle genotypic data. A penalized maximum likelihood criterion is proposed, and a non-asymptotic oracle inequality is obtained. The criterion used in practice is calibrated by the slope heuristics. Its performance on simulated data is better than that of the classical criteria BIC and AIC. All the procedures are implemented in freely available software.
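
    One common way to formalize the joint choice of the number of components and of the relevant variables (the exact treatment of the irrelevant variables below is an assumption, not taken from the abstract): with $K$ components, $J$ categorical variables and a subset $S$ of relevant variables, the model density factorizes as

        \[
          p_{K,S}(x_1, \dots, x_J)
          = \sum_{k=1}^{K} \pi_k
            \prod_{j \in S} \alpha_{kj}(x_j)
            \prod_{j \notin S} \beta_{j}(x_j),
        \]
        % where the \alpha_{kj} and \beta_j are multinomial distributions and the
        % penalized criterion selects the pair (K, S).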